
    A class of fast exact Bayesian filters in dynamical models with jumps

    Full text link
    In this paper, we focus on the statistical filtering problem in dynamical models with jumps. When a particular application relies on physical properties which are modeled by linear and Gaussian probability density functions with jumps, a usual method consists in approximating the optimal Bayesian estimate (in the sense of the Minimum Mean Square Error (MMSE)) in a linear and Gaussian Jump Markov State Space System (JMSS). Practical solutions include algorithms based on numerical approximations or on Sequential Monte Carlo (SMC) methods. In this paper, we propose a class of alternative methods which consists in building statistical models which share the same physical properties of interest, but in which the optimal MMSE estimate can be computed at a cost which is linear in the number of observations.
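
    To make the setting concrete, here is a minimal sketch (in Python, with illustrative parameter values and variable names) of a linear and Gaussian JMSS of the kind considered above; it only simulates the model and does not reproduce the exact filters proposed in the paper.

```python
# Minimal sketch (not the paper's method): simulating a linear and Gaussian
# Jump Markov State Space System. The regime r_n is a discrete Markov chain;
# conditionally on r_n, the state x_n and observation y_n are linear-Gaussian.
import numpy as np

rng = np.random.default_rng(0)

P = np.array([[0.95, 0.05],      # regime transition matrix p(r_{n+1} | r_n)
              [0.10, 0.90]])
F = [0.9, 0.5]                   # state transition coefficient per regime
Q = [0.1, 1.0]                   # state noise variance per regime
H, R = 1.0, 0.5                  # observation coefficient and noise variance

def simulate_jmss(T):
    r = np.zeros(T, dtype=int)
    x = np.zeros(T)
    y = np.zeros(T)
    x[0] = rng.normal()
    y[0] = H * x[0] + rng.normal(scale=np.sqrt(R))
    for n in range(1, T):
        r[n] = rng.choice(2, p=P[r[n - 1]])
        x[n] = F[r[n]] * x[n - 1] + rng.normal(scale=np.sqrt(Q[r[n]]))
        y[n] = H * x[n] + rng.normal(scale=np.sqrt(R))
    return r, x, y

r, x, y = simulate_jmss(200)
# In such a model the exact MMSE estimate E[x_n | y_{0:n}] is a mixture over
# all 2^n regime sequences, which is why approximate solutions (e.g. SMC) or
# alternative exactly computable models are used in practice.
```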

    Independent Resampling Sequential Monte Carlo Algorithms

    Full text link
    Sequential Monte Carlo algorithms, or Particle Filters, are Bayesian filtering algorithms which propagate in time a discrete and random approximation of the a posteriori distribution of interest. Such algorithms are based on Importance Sampling with a bootstrap resampling step which aims at fighting weight degeneracy. However, in some situations (informative measurements, high-dimensional models), the resampling step can prove inefficient. In this paper, we revisit the fundamental resampling mechanism, which leads us back to Rubin's static resampling mechanism. We propose an alternative rejuvenation scheme in which the resampled particles share the same marginal distribution as in the classical setup, but are now independent. This set of independent particles provides a new alternative for computing a moment of the target distribution, and the resulting estimate is analyzed through a CLT. We next adapt our results to the dynamic case and propose a particle filtering algorithm based on independent resampling. This algorithm can be seen as a particular auxiliary particle filter algorithm with a relevant choice of the first-stage weights and instrumental distributions. Finally, we validate our results via simulations which carefully take into account the computational budget.
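
    The contrast between classical and independent resampling can be sketched in a simplified, static importance-sampling setting, as below. This is only an illustration of the mechanism, not the auxiliary particle filter of the paper; target_logpdf, proposal_sample and proposal_logpdf are hypothetical user-supplied functions.

```python
# Illustrative sketch of classical vs. independent resampling in a static
# importance-sampling setting (scalar particles, placeholder densities).
import numpy as np

rng = np.random.default_rng(1)

def is_weights(particles, target_logpdf, proposal_logpdf):
    logw = target_logpdf(particles) - proposal_logpdf(particles)
    w = np.exp(logw - logw.max())
    return w / w.sum()

def classical_resampling(N, target_logpdf, proposal_sample, proposal_logpdf):
    """Rubin's SIR: one batch of N weighted particles, N multinomial draws.
    The resampled particles are identically distributed but dependent."""
    x = proposal_sample(N)
    w = is_weights(x, target_logpdf, proposal_logpdf)
    return x[rng.choice(N, size=N, p=w)]

def independent_resampling(N, target_logpdf, proposal_sample, proposal_logpdf):
    """For each output particle, draw a fresh batch of N particles and resample
    a single one from it: the N outputs share the same marginal distribution
    as above but are now mutually independent (at an O(N^2) sampling cost)."""
    out = np.empty(N)
    for i in range(N):
        x = proposal_sample(N)
        w = is_weights(x, target_logpdf, proposal_logpdf)
        out[i] = x[rng.choice(N, p=w)]
    return out
```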

    Semi-independent resampling for particle filtering

    Full text link
    Among Sequential Monte Carlo (SMC) methods, Sampling Importance Resampling (SIR) algorithms are based on Importance Sampling (IS) and on a resampling-based rejuvenation step which aims at fighting weight degeneracy. However, whichever resampling technique is used, this mechanism tends to be insufficient when applied to informative or high-dimensional models. In this paper, we revisit the rejuvenation mechanism and propose a class of parameterized SIR-based solutions which make it possible to adjust the tradeoff between computational cost and statistical performance.
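
    As an illustration only, one plausible way to parameterize such a tradeoff (an assumption made for this sketch, not necessarily the authors' scheme) is to split the N resampled particles into K groups, each group being drawn from its own fresh batch: K = 1 recovers classical resampling, K = N fully independent resampling, and intermediate values interpolate between the two costs.

```python
# Hypothetical "semi-independent" parameterization (illustrative sketch only):
# particles within a group share one fresh batch (hence are dependent), while
# particles in different groups are independent. Cost: K batches of size N.
import numpy as np

rng = np.random.default_rng(2)

def semi_independent_resampling(N, K, target_logpdf, proposal_sample,
                                proposal_logpdf):
    out = []
    sizes = np.full(K, N // K)
    sizes[: N % K] += 1                      # distribute the remainder
    for size in sizes:
        x = proposal_sample(N)               # fresh batch for this group
        logw = target_logpdf(x) - proposal_logpdf(x)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        out.append(x[rng.choice(N, size=size, p=w)])
    return np.concatenate(out)
```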

    Binary classification based Monte Carlo simulation

    Full text link
    Acceptance-rejection (AR), Independent Metropolis-Hastings (IMH) and importance sampling (IS) Monte Carlo (MC) simulation algorithms all involve computing ratios of probability density functions (pdfs). On the other hand, classifiers discriminate labeled samples produced by a mixture of two distributions and can be used to approximate the ratio of the two corresponding pdfs. This bridge between simulation and classification enables us to propose pdf-free versions of pdf-ratio-based simulation algorithms, in which the ratio is replaced by a surrogate function computed via a classifier. From a probabilistic modeling perspective, our procedure involves a structured energy-based model which can easily be trained and is compatible with the classical samplers.
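
    A minimal sketch of the classification-to-ratio bridge, assuming a toy one-dimensional setting and a logistic-regression classifier (this is a generic illustration, not the structured energy-based model of the paper): with balanced samples from the two distributions, the classifier's posterior d(x) yields the surrogate ratio d(x)/(1 - d(x)).

```python
# Classifier-based surrogate for a pdf ratio, then used for pdf-free
# self-normalized importance sampling. All distributions below are toy choices.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Toy setting: "target" p = N(2, 1) (samples available), proposal q = N(0, 2^2).
xp = rng.normal(2.0, 1.0, size=5000)          # samples labeled 1 (from p)
xq = rng.normal(0.0, 2.0, size=5000)          # samples labeled 0 (from q)

X = np.concatenate([xp, xq]).reshape(-1, 1)
features = np.hstack([X, X ** 2])             # quadratic features suffice here
labels = np.concatenate([np.ones(5000), np.zeros(5000)])

clf = LogisticRegression().fit(features, labels)

def ratio_surrogate(x):
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    d = clf.predict_proba(np.hstack([x, x ** 2]))[:, 1]
    return d / (1.0 - d)                      # approximates p(x) / q(x)

# pdf-free self-normalized importance sampling estimate of E_p[X]
xs = rng.normal(0.0, 2.0, size=20000)         # draws from q
w = ratio_surrogate(xs)
print((w * xs).sum() / w.sum())               # should be close to 2
```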

    Comparing the modeling powers of RNN and HMM

    Get PDF
    Recurrent Neural Networks (RNN) and Hidden Markov Models (HMM) are popular models for processing sequential data and have found many applications such as speech recognition, time series prediction or machine translation. Although both models have been extended in several ways (e.g., Long Short-Term Memory and Gated Recurrent Unit architectures, Variational RNN, partially observed Markov models, ...), their theoretical understanding remains partially open. In this context, our approach consists in classifying both models from an information geometry point of view. More precisely, both models can be used for modeling the distribution of a sequence of random observations from a set of latent variables; however, in an RNN the latent variable is deterministically deduced from the current observation and the previous latent variable, while in an HMM the set of (random) latent variables is a Markov chain. In this paper, we first embed these two generative models into a generative unified model (GUM). We next consider the subclass of GUM models which yield a stationary Gaussian observation probability density function (pdf). Such pdfs are characterized by their covariance sequence; we show that the GUM model can produce any stationary Gaussian distribution with a geometrical covariance structure. We finally discuss the modeling power of the HMM and RNN submodels via their associated observation pdfs: some observation pdfs can be modeled by an RNN but not by an HMM, and vice versa; some can be produced by both structures, up to a re-parameterization.
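
    The structural difference between the two latent mechanisms can be sketched with generic linear-Gaussian generators (illustrative only, not the GUM construction of the paper): in the RNN-style generator the latent is a deterministic function of the previous latent and the previous observation, while in the HMM-style generator the latent is itself a random Markov chain.

```python
# Schematic contrast of the two generative latent structures (toy parameters).
import numpy as np

rng = np.random.default_rng(4)
T = 100

def generate_rnn(a=0.8, b=0.3, c=1.0, sigma=0.5):
    h, ys = 0.0, []
    y = c * h + rng.normal(scale=sigma)
    for _ in range(T):
        ys.append(y)
        h = a * h + b * y                    # deterministic latent update
        y = c * h + rng.normal(scale=sigma)  # randomness only in the emission
    return np.array(ys)

def generate_hmm(a=0.8, c=1.0, q=0.3, sigma=0.5):
    x, ys = 0.0, []
    for _ in range(T):
        x = a * x + rng.normal(scale=q)      # random (Markov) latent update
        ys.append(c * x + rng.normal(scale=sigma))
    return np.array(ys)
```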

    Modèles de Markov Triplet et filtrage de Kalman (Triplet Markov models and Kalman filtering)

    Get PDF
    Kalman filtering makes it possible to estimate a multivariate unobservable process $x = \{x_n\}_{n \in \mathbb{N}}$ from an observed multivariate process $y = \{y_n\}_{n \in \mathbb{N}}$. It admits a lot of applications, in particular in signal processing. In its classical framework, it is based on a dynamic stochastic model in which $x$ satisfies a linear evolution equation and the conditional law of $y$ given $x$ is given by the laws $p(y_n|x_n)$. In this Note, we propose two successive generalizations of the classical model. The first one, which leads to the "Pairwise" model, consists in assuming that the evolution equation of $x$ is indeed satisfied by the pair $(x, y)$. We show that the new model is strictly more general than the classical one, and yet still enables Kalman-like filtering. The second one, which leads to the "Triplet" model, consists in assuming that the evolution equation is satisfied by a triplet $(x, r, y)$, in which $r = \{r_n\}_{n \in \mathbb{N}}$ is an (artificial) auxiliary process. We show that the Triplet model is strictly more general than the Pairwise one, and yet still enables Kalman filtering.
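
    A minimal sketch of Kalman-like filtering in a Pairwise model, assuming the pair $z_n = (x_n, y_n)$ follows a linear-Gaussian evolution $z_{n+1} = F z_n + w_n$ with $w_n \sim N(0, Q)$ and that the initial conditional law $x_0 | y_0 \sim N(m_0, P_0)$ is available. The function below is a generic application of Gaussian prediction and conditioning, not the Note's exact presentation; the classical Kalman filter is recovered when the blocks of F and Q take their usual structure.

```python
# Kalman-like filtering in a Pairwise linear-Gaussian model (sketch).
import numpy as np

def pairwise_kalman_filter(ys, F, Q, m0, P0, dx):
    """ys: observations y_0, ..., y_T (each of dimension dy).
    F, Q: evolution matrix and noise covariance of z_n = (x_n, y_n).
    m0, P0: mean and covariance of x_0 given y_0.
    Returns the filtering means and covariances of x_n given y_{0:n}."""
    m = np.atleast_1d(m0).astype(float)
    P = np.atleast_2d(P0).astype(float)
    y_prev = np.atleast_1d(ys[0]).astype(float)
    means, covs = [m], [P]
    Fx = F[:, :dx]                                  # columns acting on x_n
    for y in ys[1:]:
        y = np.atleast_1d(y).astype(float)
        # joint Gaussian prediction of (x_{n+1}, y_{n+1}) given y_{0:n}
        mu = F @ np.concatenate([m, y_prev])
        S = Fx @ P @ Fx.T + Q
        mu_x, mu_y = mu[:dx], mu[dx:]
        Sxx, Sxy, Syy = S[:dx, :dx], S[:dx, dx:], S[dx:, dx:]
        # condition on the new observation y_{n+1}
        K = Sxy @ np.linalg.inv(Syy)
        m = mu_x + K @ (y - mu_y)
        P = Sxx - K @ Sxy.T
        means.append(m)
        covs.append(P)
        y_prev = y
    return means, covs
```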

    Entropy computation in partially observed Markov chains

    No full text
    Let $X = \{X_n\}_{n \in \mathbb{N}}$ be a hidden process and $Y = \{Y_n\}_{n \in \mathbb{N}}$ be an observed process. We assume that $(X, Y)$ is a (pairwise) Markov chain (PMC). PMC are more general than Hidden Markov Chains (HMC) and yet enable the development of efficient parameter estimation and Bayesian restoration algorithms. In this paper, we propose a fast (i.e., $O(N)$) algorithm for computing the entropy of $\{X_n\}_{n=0}^{N}$ given an observation sequence $\{y_n\}_{n=0}^{N}$.
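
    One way such an O(N) forward recursion can be organized is sketched below for a finite-state PMC with a toy Gaussian emission depending on both the current hidden state and the previous observation. The factorization of the PMC kernel and all parameter values are assumptions made for the sketch, not necessarily the paper's algorithm: alongside the filter alpha_n(x) = p(x_n | y_{0:n}), it propagates the partial entropies H_n(x) = H(X_{0:n-1} | X_n = x, y_{0:n}).

```python
# Forward O(N) computation of H(X_{0:N} | y_{0:N}) in a toy finite-state PMC.
import numpy as np

def gauss_logpdf(y, m, s=1.0):
    return -0.5 * np.log(2 * np.pi * s ** 2) - 0.5 * ((y - m) / s) ** 2

def pmc_entropy(ys, pi0, trans, emis_logpdf):
    """ys: observations y_0, ..., y_N;  pi0[x]: p(x_0);
    trans[x, x']: p(x_{n+1} = x' | x_n = x);
    emis_logpdf(x_next, y_next, y_prev): log p(y_{n+1} | x_{n+1}, y_n).
    Returns H(X_{0:N} | y_{0:N}) in nats."""
    K = len(pi0)
    la = np.log(pi0) + np.array([emis_logpdf(x, ys[0], None) for x in range(K)])
    alpha = np.exp(la - la.max()); alpha /= alpha.sum()   # alpha_0(x)
    H = np.zeros(K)                                       # H_0(x) = 0
    for n in range(len(ys) - 1):
        lik = np.array([emis_logpdf(xp, ys[n + 1], ys[n]) for xp in range(K)])
        # joint weight over (x_n, x_{n+1}) given y_{0:n+1}, up to a constant
        J = alpha[:, None] * trans * np.exp(lik - lik.max())[None, :]
        colsum = J.sum(axis=0)
        beta = J / colsum[None, :]     # p(x_n = x | x_{n+1} = x', y_{0:n+1})
        safe = np.where(beta > 0, beta, 1.0)
        H = beta.T @ H - (beta * np.log(safe)).sum(axis=0)  # entropy recursion
        alpha = colsum / colsum.sum()                        # filter recursion
    safe = np.where(alpha > 0, alpha, 1.0)
    return float(alpha @ H - np.sum(alpha * np.log(safe)))

# toy PMC: 2 hidden states, y_{n+1} ~ N(mu[x_{n+1}] + 0.5 * y_n, 1)
mu = np.array([-1.0, 2.0])
def emis_logpdf(x, y, y_prev):
    return gauss_logpdf(y, mu[x] + (0.0 if y_prev is None else 0.5 * y_prev))

trans = np.array([[0.9, 0.1], [0.2, 0.8]])
pi0 = np.array([0.5, 0.5])
print(pmc_entropy(np.array([0.3, -1.2, 2.1, 1.8, -0.5]), pi0, trans, emis_logpdf))
```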

    Bayesian conditional Monte Carlo Algorithm for nonlinear time-series state estimation

    No full text
    Bayesian filtering aims at sequentially estimating a hidden process from an observed one. In particular, sequential Monte Carlo (SMC) techniques propagate in time weighted trajectories which represent the posterior probability density function (pdf) of the hidden process given the available observations. On the other hand, conditional Monte Carlo (CMC) is a variance reduction technique which replaces the estimator of a moment of interest by its conditional expectation given another variable. In this paper, we show that, up to some adaptations, one can make use of the time-recursive nature of SMC algorithms in order to propose natural temporal CMC estimators of some point estimates of the hidden process, which outperform the associated crude Monte Carlo (MC) estimator whatever the number of samples. We next show that our Bayesian CMC estimators can be computed exactly, or approximated efficiently, in some hidden Markov chain (HMC) models, in some jump Markov state-space systems (JMSS), as well as in multitarget filtering. Finally, our algorithms are validated via simulations.
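
    The variance-reduction principle behind CMC can be illustrated with a static toy example (not the paper's sequential construction): to estimate P(X > c) with Z ~ N(0,1) and X | Z ~ N(Z,1), the crude MC estimator averages indicators 1{X_i > c}, while the CMC estimator averages the conditional expectations E[1{X > c} | Z_i] = 1 - Phi(c - Z_i), whose variance is smaller by the law of total variance.

```python
# Crude Monte Carlo vs. conditional Monte Carlo (Rao-Blackwellization) on a toy
# static problem; both estimate P(X > c) with X ~ N(0, 2).
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(5)
c, N = 1.5, 10000

def std_normal_cdf(u):
    return 0.5 * (1.0 + erf(u / sqrt(2.0)))

z = rng.normal(size=N)                 # auxiliary variable Z
x = z + rng.normal(size=N)             # X | Z ~ N(Z, 1)

crude = np.mean(x > c)                                      # crude MC
cmc = np.mean([1.0 - std_normal_cdf(c - zi) for zi in z])   # conditional MC
print(crude, cmc)
```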